65 research outputs found

    Developing a research method to test a new first-order decision making model for the procurement of public sector major infrastructure

    Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and simultaneously delivering Value for Money (VfM). The paper begins with an update on a key development in a new early/first-order procurement decision making model that deploys production cost/benefit theory and theories concerning transaction costs from the New Institutional Economics, in order to identify a procurement mode that is likely to deliver the best ratio of production costs and transaction costs to production benefits, and therefore deliver superior VfM relative to alternative procurement modes. In doing so, the new procurement model is also able to address the uncertainty concerning the relative merits of Public-Private Partnerships (PPP) and non-PPP procurement approaches. The main aim of the paper is to develop competition as a dependent variable/proxy for VfM and a hypothesis (overarching proposition), as well as developing a research method to test the new procurement model. Competition reflects both production costs and benefits (absolute level of competition) and transaction costs (level of realised competition) and is a key proxy for VfM. Using competition as a proxy for VfM, the overarching proposition is given as: When the actual procurement mode matches the predicted (theoretical) procurement mode (informed by the new procurement model), then actual competition is expected to match potential competition (based on actual capacity). To collect data to test this proposition, the research method that is developed in this paper combines a survey and case study approach. More specifically, data collection instruments for the surveys to collect data on actual procurement, actual competition and potential competition are outlined. 
Finally, plans for analysing the survey data are briefly outlined, along with the planned use of analytical pattern matching to deploy the new procurement model and to develop the predicted (theoretical) procurement mode.

    Elicited imitation test as a measure of L2 English learners’ interlanguage representations of relative clauses

    This study examines the use of the Elicited Imitation Test (EIT) to measure second language learners’ underlying knowledge of restrictive relative clauses, which reflects their interlanguage representation of this property. It also investigates the acquisition of relative clauses by two groups of L2 English learners: L1 Malay and L1 Chinese speakers. This study employed two different testing instruments, i.e. the Elicited Imitation Test and the Grammaticality Judgement Test. The study follows the operational definitions established by Ellis (2004) for two constructs corresponding to implicit and explicit knowledge of the linguistic property being tested. Statistical analyses were carried out on the data obtained from the EIT and GJT. Results showed that learners were generally better at judging and imitating grammatical items in both tests. Scores obtained were also comparable, indicating there was no significant difference between the mean scores obtained by the L1 Chinese and L1 Malay learners. However, it was discovered that when it came to ungrammatical items, learners were less determinate in their judgement and production. They were less proficient in their ability to imitate or judge and recast the ungrammatical items correctly. The results indicate that the L1 Malay and L1 Chinese learners of L2 English have interlanguage representations that differ from native speakers’ underlying representations of the said property. In addition, a correlation coefficient analysis was also conducted between the grammatical and ungrammatical items in both the EIT and GJT, to determine if a correlation exists. The results from the coefficient analysis showed no correlation between the ungrammatical items in the two tests. However, there was a correlation between the grammatical items in the EIT and GJT. This result suggests that the grammatical items in both tests measure implicit knowledge. 
A suggestion is put forward as to why the ungrammatical items in the two tests did not correlate.
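
The correlation step described above can be sketched in a few lines; the per-learner scores below are invented purely for illustration and are not the study's data.

```python
import math

# Illustrative sketch of the correlation analysis between grammatical-item
# scores on the EIT and GJT. All scores below are invented, not study data.
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-learner accuracy on grammatical items in each test.
eit_scores = [0.80, 0.65, 0.90, 0.70, 0.85, 0.60]
gjt_scores = [0.78, 0.60, 0.88, 0.72, 0.80, 0.65]
print(f"r = {pearson_r(eit_scores, gjt_scores):.2f}")
```

A strong positive r for grammatical items, alongside a near-zero r for ungrammatical items, would mirror the pattern the study reports.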

    Crafting an efficient bundle of property rights to determine the suitability of a Public-Private Partnership: A new theoretical framework

    A Public–Private Partnership (PPP) procurement mode is poised to play a leading role in delivering global infrastructure. However, there is no fundamental microeconomic framework to determine whether a project or part(s) of a project is a suitable PPP. This paper presents the development of a new theoretical framework that overarches and harnesses the application and integration of prominent microeconomic theories, namely, transaction cost and resource-based theories, property rights theory and principal-agent theory, to explain how an efficient bundle of property rights, associated with externalised project activities, is configured or crafted. This novel framework is developed to contribute significantly to advancing the rigour and transparency of PPP selection, as well as advancing theory of the firm. In turn, this change in current PPP thinking would appreciably increase the prospect of PPPs efficiently addressing the substantial appetite for this mode of procurement.

    Re-Examining the Association between Quality and Safety Performance in Construction: From Heterogeneous to Homogeneous Datasets

    Recent research revealed that a significant positive relationship exists between quality and safety performance. A major limitation of this research, however, was the nature of the sample; it was heterogeneous (i.e., a combination of U.S. and international projects) and restricted to 18 projects. Building upon the initial research, this paper re-examines the association between quality and safety using a homogeneous sample of 569 projects, which were derived from an Australian construction company with an annual turnover in excess of AU$1 billion. A total of 19,314 nonconformances and 17,783 injuries were used to determine the validity and reliability of previous research. A weak association between quality and safety performance was found (p<0.01). The p-values did not indicate any significant association between first aid and quality rates, except for the injury rate and rework frequency per million scope, which yielded an r-value of 0.307 and a p-value of 0.046, significant at the 0.05 level. An association, however, between injuries and rework was identified (r²=0.70). The discrepancy between this research’s findings and that of previous work led to an examination of the issues of using ratios in correlation analysis. Thus, the statistical and arithmetic issues associated with the use of ratios are discussed, and it is recommended that the relationship between quality and safety be examined using regression techniques or analysis of covariance. Linear regression, therefore, was performed with the injury data as the dependent variable, and rework frequency and personnel hours as the independent variables. The regression results demonstrated that there is a significant association between injuries, and rework and personnel hours; it was revealed that both predictors accounted for 68.2% of the explained variability in injury frequency. 
The replication of the initial research has enabled a significant advancement in knowledge about the relationship between quality and safety performance.
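
As a rough illustration of the regression step described above, the following sketch fits injury frequency on rework frequency and personnel hours by ordinary least squares; all data are synthetic and the coefficients are invented, not the study's estimates.

```python
import numpy as np

# Hypothetical illustration of the paper's regression step: injury frequency
# regressed on rework frequency and personnel hours. Data below are synthetic.
rng = np.random.default_rng(0)
n = 50
rework = rng.uniform(0, 100, n)           # rework nonconformances per project
hours = rng.uniform(1e4, 1e6, n)          # personnel hours per project
injuries = 0.05 * rework + 2e-5 * hours + rng.normal(0, 1, n)

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones(n), rework, hours])
beta, *_ = np.linalg.lstsq(X, injuries, rcond=None)

# R^2: share of injury-frequency variance explained by the two predictors
# (the study reports 68.2% explained variability for its real data).
pred = X @ beta
r2 = 1 - np.sum((injuries - pred) ** 2) / np.sum((injuries - injuries.mean()) ** 2)
print(f"coefficients: {beta}, R^2 = {r2:.3f}")
```

The same design matrix approach extends directly to analysis of covariance, which the paper recommends over ratio-based correlation.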

    From Quality-I to Quality-II: cultivating an error culture to support lean thinking and rework mitigation in infrastructure projects

    While lean thinking may help tackle waste, rework remains an ongoing problem during the construction of infrastructure projects. Often too much emphasis is placed on applying lean tools rather than harnessing the human factor and establishing a culture to mitigate rework. Thus, this paper proposes the need for construction organisations to transition from the prevailing error prevention culture (i.e. Quality-I) that pervades practice to one based on error management (i.e. Quality-II) if rework is to be contained and reduced. Accordingly, this paper asks: What type of error culture is required to manage errors that result in rework and to support lean thinking during the construction of infrastructure projects? We draw on the case of a program alliance of 129 water infrastructure projects and make sense of how it enacted, in addition to lean thinking, a change initiative to transition from error prevention to an error management culture to address its rework problem. We observed that leadership, psychological safety and coaching were pivotal for cultivating a culture where there was an acceptance that ‘errors happen’ and effort was directed at mitigating their adverse consequences. The contributions of this paper are twofold as we provide: (1) a new theoretical underpinning to mitigate rework and support the use of lean thinking during the construction of infrastructure projects grounded in Quality-II; and (2) practical suggestions, based on actual experiences, which can be readily employed to monitor and anticipate rework at the coalface of construction.

    Antimicrobial resistance among migrants in Europe: a systematic review and meta-analysis

    BACKGROUND: Rates of antimicrobial resistance (AMR) are rising globally and there is concern that increased migration is contributing to the burden of antibiotic resistance in Europe. However, the effect of migration on the burden of AMR in Europe has not yet been comprehensively examined. Therefore, we did a systematic review and meta-analysis to identify and synthesise data for AMR carriage or infection in migrants to Europe to examine differences in patterns of AMR across migrant groups and in different settings. METHODS: For this systematic review and meta-analysis, we searched MEDLINE, Embase, PubMed, and Scopus with no language restrictions from Jan 1, 2000, to Jan 18, 2017, for primary data from observational studies reporting antibacterial resistance in common bacterial pathogens among migrants to 21 European Union-15 and European Economic Area countries. To be eligible for inclusion, studies had to report data on carriage or infection with laboratory-confirmed antibiotic-resistant organisms in migrant populations. We extracted data from eligible studies and assessed quality using piloted, standardised forms. We did not examine drug resistance in tuberculosis and excluded articles solely reporting on this parameter. We also excluded articles in which migrant status was determined by ethnicity, country of birth of participants' parents, or was not defined, and articles in which data were not disaggregated by migrant status. Outcomes were carriage of or infection with antibiotic-resistant organisms. We used random-effects models to calculate the pooled prevalence of each outcome. The study protocol is registered with PROSPERO, number CRD42016043681. FINDINGS: We identified 2274 articles, of which 23 observational studies reporting on antibiotic resistance in 2319 migrants were included. 
The pooled prevalence of any AMR carriage or AMR infection in migrants was 25·4% (95% CI 19·1-31·8; I²=98%), including meticillin-resistant Staphylococcus aureus (7·8%, 4·8-10·7; I²=92%) and antibiotic-resistant Gram-negative bacteria (27·2%, 17·6-36·8; I²=94%). The pooled prevalence of any AMR carriage or infection was higher in refugees and asylum seekers (33·0%, 18·3-47·6; I²=98%) than in other migrant groups (6·6%, 1·8-11·3; I²=92%). The pooled prevalence of antibiotic-resistant organisms was slightly higher in high-migrant community settings (33·1%, 11·1-55·1; I²=96%) than in migrants in hospitals (24·3%, 16·1-32·6; I²=98%). We did not find evidence of high rates of transmission of AMR from migrant to host populations. INTERPRETATION: Migrants are exposed to conditions favouring the emergence of drug resistance during transit and in host countries in Europe. Increased antibiotic resistance among refugees and asylum seekers and in high-migrant community settings (such as refugee camps and detention facilities) highlights the need for improved living conditions, access to health care, and initiatives to facilitate detection of and appropriate high-quality treatment for antibiotic-resistant infections during transit and in host countries. Protocols for the prevention and control of infection and for antibiotic surveillance need to be integrated in all aspects of health care, which should be accessible for all migrant groups, and should target determinants of AMR before, during, and after migration. FUNDING: UK National Institute for Health Research Imperial Biomedical Research Centre, Imperial College Healthcare Charity, the Wellcome Trust, and UK National Institute for Health Research Health Protection Research Unit in Healthcare-associated Infections and Antimicrobial Resistance at Imperial College London.
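
The pooled-prevalence estimates above come from random-effects models; a minimal sketch of DerSimonian-Laird pooling, using invented study counts rather than the review's data, looks like this:

```python
import math

# Hedged sketch of DerSimonian-Laird random-effects pooling of prevalences.
# The (cases, sample size) pairs below are invented for illustration only.
studies = [(12, 80), (30, 150), (5, 60), (45, 200)]

# Per-study prevalence and within-study variance (normal approximation).
p = [c / n for c, n in studies]
v = [pi * (1 - pi) / n for pi, (_, n) in zip(p, studies)]

# Fixed-effect weights, pooled estimate, and Cochran's Q for heterogeneity.
w = [1 / vi for vi in v]
p_fe = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
q = sum(wi * (pi - p_fe) ** 2 for wi, pi in zip(w, p))

# DerSimonian-Laird between-study variance tau^2 and the I^2 statistic.
df = len(studies) - 1
c_adj = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c_adj)
i2 = max(0.0, (q - df) / q) if q > 0 else 0.0

# Random-effects weights and pooled prevalence with a 95% CI.
w_re = [1 / (vi + tau2) for vi in v]
p_re = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled prevalence {p_re:.3f} "
      f"(95% CI {p_re - 1.96 * se:.3f}-{p_re + 1.96 * se:.3f}; I2={i2:.0%})")
```

The high I² values reported in the review (92-98%) indicate that most of the observed variation is between-study heterogeneity, which is why a random-effects rather than fixed-effect model is appropriate here.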

    Surgical site infection after gastrointestinal surgery in high-income, middle-income, and low-income countries: a prospective, international, multicentre cohort study

    Background: Surgical site infection (SSI) is one of the most common infections associated with health care, but its importance as a global health priority is not fully understood. We quantified the burden of SSI after gastrointestinal surgery in countries in all parts of the world. Methods: This international, prospective, multicentre cohort study included consecutive patients undergoing elective or emergency gastrointestinal resection within 2-week time periods at any health-care facility in any country. Countries with participating centres were stratified into high-income, middle-income, and low-income groups according to the UN's Human Development Index (HDI). Data variables from the GlobalSurg 1 study and other studies that have been found to affect the likelihood of SSI were entered into risk adjustment models. The primary outcome measure was the 30-day SSI incidence (defined by US Centers for Disease Control and Prevention criteria for superficial and deep incisional SSI). Relationships with explanatory variables were examined using Bayesian multilevel logistic regression models. This trial is registered with ClinicalTrials.gov, number NCT02662231. Findings: Between Jan 4, 2016, and July 31, 2016, 13 265 records were submitted for analysis. 12 539 patients from 343 hospitals in 66 countries were included. 7339 (58·5%) patients were from high-HDI countries (193 hospitals in 30 countries), 3918 (31·2%) patients were from middle-HDI countries (82 hospitals in 18 countries), and 1282 (10·2%) patients were from low-HDI countries (68 hospitals in 18 countries). In total, 1538 (12·3%) patients had SSI within 30 days of surgery. The incidence of SSI varied between countries with high (691 [9·4%] of 7339 patients), middle (549 [14·0%] of 3918 patients), and low (298 [23·2%] of 1282 patients) HDI (p < 0·001). 
The highest SSI incidence in each HDI group was after dirty surgery (102 [17·8%] of 574 patients in high-HDI countries; 74 [31·4%] of 236 patients in middle-HDI countries; 72 [39·8%] of 181 patients in low-HDI countries). Following risk factor adjustment, patients in low-HDI countries were at greatest risk of SSI (adjusted odds ratio 1·60, 95% credible interval 1·05–2·37; p=0·030). 132 (21·6%) of 610 patients with an SSI and a microbiology culture result had an infection that was resistant to the prophylactic antibiotic used. Resistant infections were detected in 49 (16·6%) of 295 patients in high-HDI countries, in 37 (19·8%) of 187 patients in middle-HDI countries, and in 46 (35·9%) of 128 patients in low-HDI countries (p < 0·001). Interpretation: Countries with a low HDI carry a disproportionately greater burden of SSI than countries with a middle or high HDI and might have higher rates of antibiotic resistance. In view of WHO recommendations on SSI prevention that highlight the absence of high-quality interventional research, urgent, pragmatic, randomised trials based in LMICs are needed to assess measures aiming to reduce this preventable complication.

    The 2021 WHO catalogue of Mycobacterium tuberculosis complex mutations associated with drug resistance: a genotypic analysis.

    Background: Molecular diagnostics are considered the most promising route to achievement of rapid, universal drug susceptibility testing for Mycobacterium tuberculosis complex (MTBC). We aimed to generate a WHO-endorsed catalogue of mutations to serve as a global standard for interpreting molecular information for drug resistance prediction. Methods: In this systematic analysis, we used a candidate gene approach to identify mutations associated with resistance or consistent with susceptibility for 13 WHO-endorsed antituberculosis drugs. We collected existing worldwide MTBC whole-genome sequencing data and phenotypic data from academic groups and consortia, reference laboratories, public health organisations, and published literature. We categorised phenotypes as follows: methods and critical concentrations currently endorsed by WHO (category 1); critical concentrations previously endorsed by WHO for those methods (category 2); methods or critical concentrations not currently endorsed by WHO (category 3). For each mutation, we used a contingency table of binary phenotypes and presence or absence of the mutation to compute positive predictive value, and we used Fisher's exact tests to generate odds ratios and Benjamini-Hochberg corrected p values. Mutations were graded as associated with resistance if present in at least five isolates, if the odds ratio was more than 1 with a statistically significant corrected p value, and if the lower bound of the 95% CI on the positive predictive value for phenotypic resistance was greater than 25%. A series of expert rules were applied for final confidence grading of each mutation. Findings: We analysed 41 137 MTBC isolates with phenotypic and whole-genome sequencing data from 45 countries. 38 215 MTBC isolates passed quality control steps and were included in the final analysis. 15 667 associations were computed for 13 211 unique mutations linked to one or more drugs. 
1149 (7·3%) of 15 667 mutations were classified as associated with phenotypic resistance and 107 (0·7%) were deemed consistent with susceptibility. For rifampicin, isoniazid, ethambutol, fluoroquinolones, and streptomycin, the mutations' pooled sensitivity was more than 80%. Specificity was over 95% for all drugs except ethionamide (91·4%), moxifloxacin (91·6%), and ethambutol (93·3%). Only two resistance mutations were identified for bedaquiline, delamanid, clofazimine, and linezolid, as prevalence of phenotypic resistance was low for these drugs. Interpretation: We present the first WHO-endorsed catalogue of molecular targets for MTBC drug susceptibility testing, which is intended to provide a global standard for resistance interpretation. The existence of this catalogue should encourage the implementation of molecular diagnostics by national tuberculosis programmes. Funding: Unitaid, Wellcome Trust, UK Medical Research Council, and Bill and Melinda Gates Foundation.
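
The grading rule described in the Methods can be sketched for a single mutation as follows; the 2x2 counts are invented, and a single uncorrected Fisher test stands in for the Benjamini-Hochberg corrected p values and expert rules applied in the study:

```python
import math

# Hedged sketch of the catalogue's grading rule for one mutation, built from a
# 2x2 table of mutation presence against binary phenotype. Counts are invented.
a, b = 40, 5    # isolates WITH the mutation: resistant / susceptible
c, d = 10, 200  # isolates WITHOUT the mutation: resistant / susceptible

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test on a 2x2 table via the hypergeometric law."""
    row1, col1, n = a + b, a + c, a + b + c + d
    def hyper(k):
        return math.comb(col1, k) * math.comb(n - col1, row1 - k) / math.comb(n, row1)
    p_obs = hyper(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(hyper(k) for k in range(lo, hi + 1) if hyper(k) <= p_obs * (1 + 1e-9))

def wilson_lower(successes, trials, z=1.96):
    """Lower bound of the Wilson 95% score interval for a proportion."""
    p = successes / trials
    centre = p + z * z / (2 * trials)
    margin = z * math.sqrt(p * (1 - p) / trials + z * z / (4 * trials * trials))
    return (centre - margin) / (1 + z * z / trials)

odds_ratio = (a * d) / (b * c)
p_value = fisher_two_sided(a, b, c, d)
ppv_lower = wilson_lower(a, a + b)   # PPV = a / (a + b), resistance given mutation

associated_with_resistance = (
    a + b >= 5                # mutation present in at least five isolates
    and odds_ratio > 1
    and p_value < 0.05        # stand-in for the corrected p value
    and ppv_lower > 0.25      # lower 95% bound on PPV above 25%
)
print(odds_ratio, p_value, associated_with_resistance)
```

Each of the four conjuncts maps onto one of the criteria stated in the Methods; a mutation failing any of them would not be graded as associated with resistance at this stage.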

    Effects of antiplatelet therapy after stroke due to intracerebral haemorrhage (RESTART): a randomised, open-label trial

    BACKGROUND: Antiplatelet therapy reduces the risk of major vascular events for people with occlusive vascular disease, although it might increase the risk of intracranial haemorrhage. Patients surviving the commonest subtype of intracranial haemorrhage, intracerebral haemorrhage, are at risk of both haemorrhagic and occlusive vascular events, but whether antiplatelet therapy can be used safely is unclear. We aimed to estimate the relative and absolute effects of antiplatelet therapy on recurrent intracerebral haemorrhage and whether this risk might exceed any reduction of occlusive vascular events. METHODS: The REstart or STop Antithrombotics Randomised Trial (RESTART) was a prospective, randomised, open-label, blinded endpoint, parallel-group trial at 122 hospitals in the UK. We recruited adults (≥18 years) who were taking antithrombotic (antiplatelet or anticoagulant) therapy for the prevention of occlusive vascular disease when they developed intracerebral haemorrhage, discontinued antithrombotic therapy, and survived for 24 h. Computerised randomisation incorporating minimisation allocated participants (1:1) to start or avoid antiplatelet therapy. We followed participants for the primary outcome (recurrent symptomatic intracerebral haemorrhage) for up to 5 years. We analysed data from all randomised participants using Cox proportional hazards regression, adjusted for minimisation covariates. This trial is registered with ISRCTN (number ISRCTN71907627). FINDINGS: Between May 22, 2013, and May 31, 2018, 537 participants were recruited a median of 76 days (IQR 29-146) after intracerebral haemorrhage onset: 268 were assigned to start and 269 (one withdrew) to avoid antiplatelet therapy. Participants were followed for a median of 2·0 years (IQR 1·0-3·0; completeness 99·3%). 
12 (4%) of 268 participants allocated to antiplatelet therapy had recurrence of intracerebral haemorrhage compared with 23 (9%) of 268 participants allocated to avoid antiplatelet therapy (adjusted hazard ratio 0·51 [95% CI 0·25-1·03]; p=0·060). 18 (7%) participants allocated to antiplatelet therapy experienced major haemorrhagic events compared with 25 (9%) participants allocated to avoid antiplatelet therapy (0·71 [0·39-1·30]; p=0·27), and 39 (15%) participants allocated to antiplatelet therapy had major occlusive vascular events compared with 38 (14%) allocated to avoid antiplatelet therapy (1·02 [0·65-1·60]; p=0·92). INTERPRETATION: These results exclude all but a very modest increase in the risk of recurrent intracerebral haemorrhage with antiplatelet therapy for patients on antithrombotic therapy for the prevention of occlusive vascular disease when they developed intracerebral haemorrhage. The risk of recurrent intracerebral haemorrhage is probably too small to exceed the established benefits of antiplatelet therapy for secondary prevention. FUNDING: British Heart Foundation.

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. Objective To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. 
Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570.